Multi-Agent Systems Lab

Interactive Educational Platform for Swarm Robotics & Distributed Control - Explore Consensus, Formation Control, and Cooperative Behaviors

Swarm Formations & Cooperative Control

Explore various multi-agent formations, consensus algorithms, and distributed control strategies. Agents coordinate using neighbor-to-neighbor (graph-based) feedback laws to achieve agreement and formation objectives; a CBF-based safety filter can optionally modify controls to prevent collisions. Use the dropdown to select different formation challenges and consensus methods.

Live readouts: Min Distance · Active CBFs · Collisions

Networked Multi-Agent Systems Mathematics:

Agent Model (typical in these demos):

$$\dot{\mathbf{p}}_i = \mathbf{u}_i, \quad i \in \{1,\dots,N\}, \quad \mathbf{p}_i \in \mathbb{R}^2$$

Each agent chooses a local control input $\mathbf{u}_i$ using only neighbor information.

Communication Graph:

$$\mathcal{G}=(\mathcal{V},\mathcal{E}),\; \mathcal{V}=\{1,\dots,N\},\; (i,j)\in\mathcal{E} \Leftrightarrow j\in\mathcal{N}_i$$ $$\mathbf{A}=[a_{ij}],\; \mathbf{D}=\mathrm{diag}\!\left(\sum_j a_{ij}\right),\; \mathbf{L}=\mathbf{D}-\mathbf{A}$$

The Laplacian $\mathbf{L}$ encodes who talks to whom; connectivity is reflected by $\lambda_2(\mathbf{L})$ for undirected graphs.
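As a concrete sketch (assuming NumPy; the 3-agent path graph is purely illustrative), the Laplacian and its Fiedler eigenvalue can be computed as:

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = D - A from a symmetric adjacency matrix A."""
    D = np.diag(A.sum(axis=1))
    return D - A

def fiedler_value(A):
    """Second-smallest eigenvalue of L; > 0 iff the undirected graph is connected."""
    eigvals = np.linalg.eigvalsh(laplacian(A))  # ascending order for symmetric matrices
    return eigvals[1]

# Example: a path graph on 3 nodes (1-2-3) is connected, so lambda_2 > 0.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
print(fiedler_value(A))   # ~1.0 for this path graph
```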

Consensus (agreement):

$$\mathbf{u}_i^{\mathrm{nom}} = -\sum_{j\in\mathcal{N}_i} a_{ij}\,(\mathbf{p}_i-\mathbf{p}_j) \quad \Longleftrightarrow \quad \dot{\mathbf{p}} = -(\mathbf{L}\otimes \mathbf{I}_2)\,\mathbf{p}$$ $$\mathbf{e} = \bigl(\mathbf{I}_N - \tfrac{1}{N}\mathbf{1}\mathbf{1}^\top\bigr)\otimes \mathbf{I}_2\;\mathbf{p}, \quad \|\mathbf{e}\|\to 0\ \text{if }\mathcal{G}\text{ is connected (undirected)}$$
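A minimal forward-Euler sketch of this consensus law (NumPy; the ring graph, step size, and initial positions are illustrative choices):

```python
import numpy as np

def consensus_step(P, A, dt=0.02):
    """One Euler step of p_i' = -sum_j a_ij (p_i - p_j) for all agents.

    P: (N, 2) positions, A: (N, N) symmetric adjacency weights."""
    L = np.diag(A.sum(axis=1)) - A
    U = -L @ P              # equivalent to u = -(L ⊗ I_2) p in stacked form
    return P + dt * U

# Agents on a ring graph converge to their common centroid.
N = 6
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
P = np.random.default_rng(0).uniform(-1, 1, size=(N, 2))
for _ in range(2000):
    P = consensus_step(P, A)
print(np.ptp(P, axis=0))    # position spread -> ~0 once agreement is reached
```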

Formation (shape via relative offsets):

$$V(\mathbf{p}) = \tfrac{1}{2}\sum_{(i,j)\in\mathcal{E}}\left\| (\mathbf{p}_i-\mathbf{p}_j) - \mathbf{r}_{ij} \right\|^2$$ $$\mathbf{u}_i^{\mathrm{nom}} = -\nabla_{\mathbf{p}_i} V = -\sum_{j\in\mathcal{N}_i}\left((\mathbf{p}_i-\mathbf{p}_j)-\mathbf{r}_{ij}\right)$$

Different formations correspond to different desired edge vectors $\mathbf{r}_{ij}$.
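A sketch of the corresponding gradient law (the square-formation targets, complete graph, and gains below are illustrative):

```python
import numpy as np

def formation_control(P, A, R):
    """u_i = -sum_{j in N_i} ((p_i - p_j) - r_ij).

    P: (N, 2) positions, A: (N, N) adjacency, R: (N, N, 2) desired offsets r_ij."""
    N = P.shape[0]
    U = np.zeros_like(P)
    for i in range(N):
        for j in range(N):
            if A[i, j] > 0:
                U[i] -= (P[i] - P[j]) - R[i, j]
    return U

# Desired edge vectors from target positions: r_ij = p*_i - p*_j (a unit square here).
targets = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
R = targets[:, None, :] - targets[None, :, :]
A = 1.0 - np.eye(4)               # complete graph among the four agents
P = np.random.default_rng(1).uniform(-2, 2, size=(4, 2))
for _ in range(500):
    P = P + 0.05 * formation_control(P, A, R)
```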

Safety Layer (secondary): collision avoidance as a CBF filter:

$$h_{ij}(\mathbf{p}_i,\mathbf{p}_j)=\|\mathbf{p}_i-\mathbf{p}_j\|^2-d_{safe}^2 \ge 0,\quad \dot{h}_{ij}+\alpha h_{ij}\ge 0$$ $$\begin{align} \min_{\mathbf{u}_i}\ &\|\mathbf{u}_i-\mathbf{u}_i^{\mathrm{nom}}\|^2\\ \mathrm{s.t.}\ &\dot{h}_{ij}(\mathbf{p},\mathbf{u})+\alpha h_{ij}(\mathbf{p})\ge 0,\ \forall j\in\mathcal{N}_i \end{align}$$
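One way to realize this filter numerically is a small per-agent QP; the sketch below uses scipy.optimize.minimize (SLSQP) and, for simplicity, treats neighbors as momentarily static when evaluating $\dot{h}_{ij}$, an assumption made only for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def cbf_filter(p_i, neighbors, u_nom, d_safe=0.5, alpha=1.0):
    """Minimally modify u_nom so that, for each neighbor j,
    2 (p_i - p_j)^T u_i + alpha * h_ij >= 0  with  h_ij = ||p_i - p_j||^2 - d_safe^2.

    Illustrative simplification: the neighbor is treated as static (u_j = 0)."""
    constraints = []
    for p_j in neighbors:
        dp = p_i - p_j
        h = dp @ dp - d_safe**2
        constraints.append({"type": "ineq",
                            "fun": lambda u, dp=dp, h=h: 2.0 * dp @ u + alpha * h})
    res = minimize(lambda u: np.sum((u - u_nom)**2), x0=u_nom,
                   constraints=constraints, method="SLSQP")
    return res.x

# Nominal control drives agent 0 straight at a neighbor; the filter clips the unsafe component.
u_safe = cbf_filter(np.array([0.0, 0.0]), [np.array([0.6, 0.0])],
                    u_nom=np.array([1.0, 0.0]))
print(u_safe)
```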

Takeaway:
• The “core” behavior is graph-based ($\mathbf{L}$, neighbor sets, agreement/formation objectives)
• Safety constraints (CBFs) act as a minimally-invasive filter on top of nominal coordination laws

Multi-Agent Connectivity with MCBF

Drag agents or click to set waypoints. Agents maintain connectivity via MCBF while tracking goals.
Live readouts: λ₂(L) (Fiedler) · λ_min(H) · Network Status · Active Links

Comparison to Scalar CBF: Scalar eigenvalue-based CBFs suffer from control chattering when eigenvalues merge (e.g., λ2 and λ3 become equal). The MCBF formulation avoids this by working directly with the matrix \(\mathbf{H}\), using Schur decomposition to handle multiplicities smoothly via orthogonal eigenspace projectors.

Matrix CBFs for Network Connectivity

Introduction

A robot team stays connected when information can flow across the whole group. Instead of directly constraining a single eigenvalue (which can be numerically tricky), Matrix CBFs keep a whole matrix positive semidefinite, yielding smooth controls even when eigenvalues merge.

  • What you'll learn: the Laplacian's role in connectivity, and how semidefinite constraints avoid chatter near eigenvalue crossings.
  • Mental model: imagine agents connected by springs; the MCBF keeps the "network stiffness" nonnegative so the structure doesn't tear.
  • Try this: reduce communication range R and disperse agents; increase c_α or ε to make connectivity more robust; change #Agents.
  • Common gotchas: if R is too small, connectivity may be physically impossible; adjust speed or targets to remain feasible.

Laplacian Matrix: \(\mathbf{L}(\mathbf{x}) = \mathbf{D}(\mathbf{x}) - \mathbf{A}(\mathbf{x})\)

where \(\mathbf{A}_{ij}\) is the adjacency weight and \(\mathbf{D}_{ii} = \sum_j \mathbf{A}_{ij}\).
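The state dependence enters through these adjacency weights. One common smooth choice (illustrative, not the only option) sets \(\mathbf{A}_{ij}\) to a Gaussian of inter-agent distance, cut off at the communication range R:

```python
import numpy as np

def adjacency_and_laplacian(P, R=2.0):
    """Range-limited adjacency A(x) and Laplacian L(x).

    Illustrative weight: A_ij = exp(-||p_i - p_j||^2 / R^2) inside range R, 0 outside."""
    N = P.shape[0]
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            d = np.linalg.norm(P[i] - P[j])
            if d < R:
                A[i, j] = A[j, i] = np.exp(-d**2 / R**2)
    L = np.diag(A.sum(axis=1)) - A
    return A, L
```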

Connectivity via Fiedler Eigenvalue:

Network is connected \(\Leftrightarrow \lambda_2(\mathbf{L}(\mathbf{x})) > 0\)

MCBF Construction:

\(\mathbf{H}(\mathbf{x}) = \mathbf{L}(\mathbf{x}) + \frac{\varepsilon}{N} \mathbf{1}\mathbf{1}^\top - \varepsilon \mathbf{I}\)

Then \(\mathbf{H} \succeq 0 \Leftrightarrow \lambda_2(\mathbf{L}(\mathbf{x})) \ge \varepsilon\): the \(\mathbf{1}\)-direction is mapped to zero while every other eigenvalue of \(\mathbf{L}\) is shifted down by \(\varepsilon\), so positive semidefiniteness of \(\mathbf{H}\) is exactly connectivity with margin \(\varepsilon > 0\).
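Continuing the sketch above, \(\mathbf{H}(\mathbf{x})\) can be assembled and its semidefiniteness checked directly (the margin \(\varepsilon\) and tolerance below are illustrative values):

```python
import numpy as np

def mcbf_matrix(L, eps=0.1):
    """H(x) = L(x) + (eps/N) 1 1^T - eps I; H >= 0 iff lambda_2(L) >= eps."""
    N = L.shape[0]
    ones = np.ones((N, 1))
    return L + (eps / N) * (ones @ ones.T) - eps * np.eye(N)

def connected_with_margin(L, eps=0.1, tol=1e-9):
    """True when the smallest eigenvalue of H is (numerically) nonnegative."""
    return np.linalg.eigvalsh(mcbf_matrix(L, eps))[0] >= -tol
```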

Multi-agent network connectivity can be maintained using MCBFs without the nonsmoothness issues of scalar eigenvalue-based CBFs. This demo shows agents maintaining communication links while tracking references. The MCBF formulation handles eigenvalue merging (when multiple eigenvalues become equal) continuously via Schur decomposition.

Semidefinite MCBF constraint:

With \(\mathbf{H}(\mathbf{x})\) as above, enforce

$$\dot{\mathbf{H}}(\mathbf{x},\mathbf{u}) + c_\alpha\,\mathbf{H}(\mathbf{x}) \succeq \delta\,\mathbf{I}.$$

Projecting a nominal control onto this convex cone uses an eigen-gradient: with \(\mathbf{v}\) the unit eigenvector associated with \(\lambda_{\min}\),

$$\frac{\partial\,\lambda_{\min}}{\partial u_j} = \mathbf{v}^\top\,\frac{\partial}{\partial u_j}\big(\dot{\mathbf{H}} + c_\alpha\,\mathbf{H}\big)\,\mathbf{v},\quad \lambda_{\min}\big(\dot{\mathbf{H}} + c_\alpha\mathbf{H}\big) \ge \delta.$$

This avoids the non-smoothness of directly constraining $\lambda_2(\mathbf{L})$ and yields smooth, chatter-free controls.
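A numerical sketch of this eigen-gradient (the callable M_of_u standing for \(\dot{\mathbf{H}} + c_\alpha \mathbf{H}\) and the finite-difference step are assumptions for illustration; the sketch also assumes \(\lambda_{\min}\) is simple, whereas the full MCBF formulation handles repeated eigenvalues via the Schur-based projectors mentioned above):

```python
import numpy as np

def lambda_min_and_grad(M_of_u, u, du=1e-6):
    """lambda_min(M(u)) and its gradient via  d lambda_min/du_j = v^T (dM/du_j) v,
    where v is the unit eigenvector of lambda_min (valid for a simple eigenvalue).

    M_of_u: callable returning the symmetric matrix Hdot(x, u) + c_alpha * H(x).
    dM/du_j is approximated by central finite differences (illustrative)."""
    M = M_of_u(u)
    eigvals, eigvecs = np.linalg.eigh(M)
    lam_min, v = eigvals[0], eigvecs[:, 0]
    grad = np.zeros_like(u, dtype=float)
    for j in range(u.size):
        e = np.zeros_like(u, dtype=float)
        e[j] = du
        dM = (M_of_u(u + e) - M_of_u(u - e)) / (2.0 * du)
        grad[j] = v @ dM @ v
    return lam_min, grad
```

With this value and gradient, the constraint \(\lambda_{\min} \ge \delta\) can be handled by any standard QP or projected-gradient step on the nominal control.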

References

  • P. Ong, Y. Xu, R. M. Bena, F. Jabbari, and A. D. Ames, "Matrix Control Barrier Functions," arXiv preprint arXiv:2508.11795, 2025.